Approximation operator


The Representation of Meaningful Precision, and Accuracy

Mani, A

arXiv.org Artificial Intelligence

The concepts of precision and accuracy are domain- and problem-dependent. The simplified numeric hard and soft measures used in statistical learning, many types of machine learning, and binary or multiclass classification are known to be of limited use for understanding the meaningfulness of models or their relevance. Arguably, they are measures neither of patterns nor of proofs. Further, there are no good measures or representations for analogous concepts in the cognition domain. In this research, the key issues are reflected upon, and a compositional knowledge representation approach in a minimalist general rough framework is proposed for these problem contexts. The latter is general enough to cover most application contexts, and may become applicable in light of improved computational tools.


Enhancement of Approximation Spaces by the Use of Primals and Neighborhood

Güler, A. Çaksu

arXiv.org Artificial Intelligence

Rough set theory is one of the most widely used and significant approaches for handling incomplete information. It begins by partitioning the universe, using equivalence relations to produce blocks. Numerous generalized rough set models have been proposed and investigated in an effort to increase flexibility and extend the range of possible uses. To contribute to this topic, we introduce four new generalized rough set models inspired by "neighborhoods and primals". By minimizing the uncertainty regions, these models are intended to help decision makers analyze and evaluate the given data more effectively. We verify this goal by demonstrating that the proposed models outperform certain current approaches in terms of improving the approximation operators (upper and lower) and accuracy measures. We argue that the proposed models preserve nearly all significant properties of the rough set model. Preserving the monotonicity property, which enables us to assess data uncertainty and boost confidence in outcomes, is one of the intriguing characterizations of the proposed models. With the aid of specific examples, we also compare the proposed approaches with current ones. Finally, we demonstrate that the new strategy, applied to an everyday health-related problem, yields more accurate findings.
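As background, the classical approximations that such models generalize can be sketched in a few lines. The following is an illustrative Python sketch (all names are mine, not from the paper); a neighborhood-based generalization would simply replace each partition block with a neighborhood N(x) while keeping the same two membership tests.

```python
# A minimal sketch of classical rough-set approximation, assuming a finite
# universe whose equivalence relation is given as a partition into blocks.
def lower_upper(blocks, target):
    """Return (lower, upper) approximations of `target` w.r.t. `blocks`."""
    target = set(target)
    lower, upper = set(), set()
    for block in map(set, blocks):
        if block <= target:          # block entirely inside the target
            lower |= block
        if block & target:           # block touches the target
            upper |= block
    return lower, upper

partition = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}
lo, up = lower_upper(partition, X)   # lo = {1, 2}, up = {1, 2, 3, 4}
accuracy = len(lo) / len(up)         # the classical accuracy measure, 0.5
```

The accuracy measure computed at the end is the standard ratio |lower|/|upper| that the abstract's "accuracy measures" refine.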


Representing Pedagogic Content Knowledge Through Rough Sets

Mani, A

arXiv.org Artificial Intelligence

A teacher's knowledge base consists of knowledge of mathematics content, knowledge of student epistemology, and pedagogical knowledge. This has significant implications for understanding students' knowledge of content, and the learning context in general. The need to formalize the different kinds of content knowledge in approximate senses is recognized in the education research literature. A related problem is that of coherent formalizability. Existing responsive or smart AI-based software systems do not concern themselves with meaning, and trained ones are replete with their own issues. In the present research, many issues in modeling teachers' understanding of content are identified, and a two-tier rough set-based model is proposed by the present author for the purpose of developing software that can aid the varied tasks of a teacher. The main advantage of the proposed approach is its ability to coherently handle vagueness, granularity and multi-modality. An extended example on equational reasoning is used to demonstrate these features. The paper is meant for rough set researchers intending to build logical models or develop meaning-aware AI software to aid teachers, and for education research experts.


Non-deterministic approximation operators: ultimate operators, semi-equilibrium semantics and aggregates (full version)

Heyninck, Jesse, Bogaerts, Bart

arXiv.org Artificial Intelligence

Approximation fixpoint theory (AFT) is an abstract and general algebraic framework for studying the semantics of non-monotonic logics. In recent work, AFT was generalized to non-deterministic operators, i.e. operators whose values are sets of elements rather than single elements. In this paper, we make three further contributions to non-deterministic AFT: (1) we define and study ultimate approximations of non-deterministic operators, (2) we give an algebraic formulation of the semi-equilibrium semantics of Amendola et al., and (3) we generalize the characterisations of disjunctive logic programs to disjunctive logic programs with aggregates. This is an extended version of our paper presented at ICLP 2023, which will appear in the special issue of TPLP with the ICLP proceedings.


Non-Deterministic Approximation Fixpoint Theory and Its Application in Disjunctive Logic Programming

Heyninck, Jesse, Arieli, Ofer, Bogaerts, Bart

arXiv.org Artificial Intelligence

Semantics of various formalisms for knowledge representation can often be described by fixpoints of corresponding operators. For example, in many logics the theories generated by a set of formulas can be seen as fixpoints of the underlying consequence operator [52]. Likewise, in logic programming, default logic and formal argumentation, all the major semantics can be formulated as different types of fixpoints of the same operator (see [22]). Such operators are usually non-monotonic, so one cannot always be sure whether their fixpoints exist or how they can be constructed. To deal with this 'illusive nature' of the fixpoints, Denecker, Marek and Truszczyński [22] introduced a method for approximating each value z of the underlying operator by a pair of elements (x, y). These elements intuitively represent lower and upper bounds on z, and so a corresponding approximation operator for the original, non-monotonic operator is constructed. If the approximating operator obtained in this way is precision-monotonic, intuitively meaning that more precise inputs give rise to more precise outputs, then by the Knaster-Tarski fixpoint theorem it has fixpoints that can be constructively computed, and which in turn approximate the fixpoints of the approximated operator, if such fixpoints exist. The usefulness of the algebraic theory underlying this computation process was demonstrated on several knowledge representation formalisms, such as propositional logic programming [20], default logic [23], autoepistemic logic [23], abstract argumentation and abstract dialectical frameworks [50], hybrid MKNF [39], the graph description language SHACL [11], and active integrity constraints [10], each of which was shown to be an instantiation of this abstract theory of approximation.
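The constructive computation described above can be made concrete on the logic programming instance. Below is a minimal Python sketch (names and program are mine, not from the paper) of a Fitting-style approximating operator on pairs (L, U) of lower/upper bounds, iterated from the least precise pair to a fixpoint, the Kripke-Kleene semantics.

```python
# A sketch of AFT on a propositional normal logic program, assuming rules are
# encoded as (head, positive_body, negative_body) triples of atom sets.
def approximator(rules, L, U):
    """Map a pair (L, U) of certainly/possibly true atoms to a refined pair."""
    new_L = {h for (h, pos, neg) in rules if pos <= L and not (neg & U)}
    new_U = {h for (h, pos, neg) in rules if pos <= U and not (neg & L)}
    return new_L, new_U

def kripke_kleene(rules, atoms):
    L, U = set(), set(atoms)             # the least precise pair
    while True:
        nL, nU = approximator(rules, L, U)
        if (nL, nU) == (L, U):
            return L, U                  # fixpoint reached
        L, U = nL, nU

# Program:  p.   q :- p.   r :- not s.   s :- s.
rules = [("p", set(), set()),
         ("q", {"p"}, set()),
         ("r", set(), {"s"}),
         ("s", {"s"}, set())]
L, U = kripke_kleene(rules, {"p", "q", "r", "s"})
# p and q end up certainly true; r and s stay undefined (in U but not in L)
```

The iteration converges because the operator is precision-monotonic on this program: L only grows and U only shrinks from the initial pair.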


Granular Generalized Variable Precision Rough Sets and Rational Approximations

Mani, A, Mitra, Sushmita

arXiv.org Artificial Intelligence

Rational approximations were introduced and studied in granular graded rough sets and their generalizations by the first author in recent research papers. The concept of rationality is determined by related ontologies and by the coherence between granularity, mereology and approximations in the context. In addition, a framework for rational approximations was introduced by her in those papers. Granular approximations constructed according to the procedures of variable precision rough sets (VPRS) are likely to be more rational than those constructed from a classical perspective under certain conditions, and this may continue to hold for some generalizations of the former. However, a formal characterization of such conditions is not available in the previously published literature. In this research, theoretical aspects of the problem are critically examined, uniform generalizations of granular VPRS are introduced, new connections with granular graded rough sets are proved, appropriate concepts of substantial parthood are introduced, their extent of compatibility with the framework is assessed, and the framework is extended. Basic assumptions are explained in detail, and additional examples are constructed for readability. Furthermore, meta-applications to cluster validation, image segmentation and dynamic sorting are proposed. Extensions to direct generalizations of VPRS, such as probabilistic rough sets, are a natural consequence of this work.
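For readers unfamiliar with VPRS, the basic relaxation it makes over classical approximation can be sketched as follows. This is an illustrative Python sketch in Ziarko's convention (an admissible error threshold beta in [0, 0.5)); the names and the example are mine, not from the paper.

```python
# A minimal sketch of variable precision lower/upper approximation, assuming
# granules are given as a partition: a granule enters the lower approximation
# when its relative misclassification error w.r.t. the target is at most beta.
def vprs(blocks, target, beta):
    target = set(target)
    lower, upper = set(), set()
    for block in map(set, blocks):
        overlap = len(block & target) / len(block)   # inclusion degree
        if overlap >= 1 - beta:
            lower |= block
        if overlap > beta:
            upper |= block
    return lower, upper

blocks = [{1, 2, 3, 4}, {5, 6}, {7, 8, 9, 10}]
X = {1, 2, 3, 5, 6, 7}
strict_lo, _ = vprs(blocks, X, beta=0.0)    # classical: only {5, 6} qualifies
relaxed_lo, _ = vprs(blocks, X, beta=0.25)  # also admits {1, 2, 3, 4} (3/4 inside)
```

Setting beta to 0 recovers the classical lower approximation, which is the sense in which VPRS generalizes it.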


Approximability and Generalisation

Turner, Andrew J., Kabán, Ata

arXiv.org Machine Learning

Approximate learning machines have become popular in the era of small devices, including quantised, factorised, hashed, or otherwise compressed predictors, and the quest to explain and guarantee good generalisation abilities for such methods has just begun. In this paper we study the role of approximability in learning, both in the full precision and the approximated settings of the predictor that is learned from the data, through a notion of sensitivity of predictors to the action of the approximation operator at hand. We prove upper bounds on the generalisation of such predictors, yielding the following main findings, for any PAC-learnable class and any given approximation operator. 1) We show that under mild conditions, approximable target concepts are learnable from a smaller labelled sample, provided sufficient unlabelled data. 2) We give algorithms that guarantee a good predictor whose approximation also enjoys the same generalisation guarantees. 3) We highlight natural examples of structure in the class of sensitivities, which reduce, and possibly even eliminate, the otherwise abundant requirement of additional unlabelled data, thereby shedding new light on what makes one problem instance easier to learn than another. These results embed the scope of modern model compression approaches into the general goal of statistical learning theory, which in turn suggests appropriate algorithms through minimising uniform bounds.
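The sensitivity notion can be illustrated empirically. The following Python sketch (the linear predictor, the quantisation operator, and all names are my assumptions for illustration, not the paper's definitions) measures the largest output change a weight-quantisation operator induces over a sample.

```python
# A minimal sketch: sensitivity of a linear predictor to a quantisation
# approximation operator, measured empirically over a sample of inputs.
def quantise(weights, step=0.5):
    """Round each weight to the nearest multiple of `step`."""
    return [round(w / step) * step for w in weights]

def predict(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

def sensitivity(weights, sample, approx):
    """Largest change in output caused by replacing weights with approx(weights)."""
    aw = approx(weights)
    return max(abs(predict(weights, x) - predict(aw, x)) for x in sample)

w = [0.9, -1.2, 0.3]                    # quantises to [1.0, -1.0, 0.5]
sample = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
s = sensitivity(w, sample, quantise)    # worst case here is the third input
```

A small sensitivity means the compressed predictor behaves like the full-precision one on the sample, which is the kind of structure the paper's bounds exploit.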


A note on belief structures and S-approximation spaces

Shakiba, Ali, Goharshady, Amir Kafshdar, Hooshmandasl, MohammadReza, Meybodi, Mohsen Alambardar

arXiv.org Artificial Intelligence

We study relations between evidence theory and S-approximation spaces. Both theories have their roots in the analysis of Dempster's multivalued mappings and lower and upper probabilities, and both are closely related to rough sets. We show that an S-approximation space satisfying a monotonicity condition induces a natural belief structure, a fundamental building block of evidence theory. We also demonstrate that a belief structure on one set induces a natural belief structure on another set when the two sets are related by a partial monotone S-approximation space.
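The belief-structure side of this correspondence is easy to make concrete. The sketch below (illustrative Python, mass values chosen by me) shows a Dempster-Shafer mass function on focal subsets and the belief and plausibility it induces; the note's contribution is that monotone S-approximation spaces give rise to exactly such structures.

```python
# A minimal sketch of a belief structure: a mass function on focal subsets
# induces belief (mass of focal sets contained in A) and plausibility
# (mass of focal sets intersecting A).
def belief(mass, A):
    A = frozenset(A)
    return sum(m for F, m in mass.items() if F <= A)

def plausibility(mass, A):
    A = frozenset(A)
    return sum(m for F, m in mass.items() if F & A)

mass = {frozenset({1}): 0.5,
        frozenset({1, 2}): 0.3,
        frozenset({2, 3}): 0.2}
b = belief(mass, {1, 2})         # 0.5 + 0.3: {1} and {1,2} are contained
pl = plausibility(mass, {1, 2})  # every focal set meets {1,2}, so total mass 1.0
```

Belief and plausibility play the roles of lower and upper probabilities, which is the bridge back to rough-set-style lower and upper approximations.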


Connectedness of graphs and its application to connected matroids through covering-based rough sets

Huang, Aiping, Zhu, William

arXiv.org Artificial Intelligence

Graph-theoretical ideas are widely used in computer science, especially in data mining, where a data structure can be designed in the form of a tree. Covering is a widely used form of data representation in data mining, and covering-based rough sets provide a systematic approach to this type of representation. In this paper, we study the connectedness of graphs through covering-based rough sets and apply it to connected matroids. First, we present an approach to inducing a covering from a graph, and then study the connectedness of the graph from the viewpoint of the covering approximation operators. Second, we construct a graph from a matroid and show that the matroid and the graph have the same connectedness, which allows us to use covering-based rough sets to study connected matroids. In summary, this paper provides a new approach to studying graph theory and matroid theory.
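One natural way to induce a covering from a graph, which may or may not match the paper's construction, is to take each vertex's closed neighborhood as a block; iterating the covering upper approximation from a single vertex then saturates exactly its connected component. The Python sketch below illustrates this (all names are mine).

```python
# A minimal sketch: a covering induced by a graph via closed neighborhoods,
# and a connectedness test via iterated covering upper approximation.
def closed_neighborhoods(vertices, edges):
    cover = {v: {v} for v in vertices}
    for u, w in edges:
        cover[u].add(w)
        cover[w].add(u)
    return list(cover.values())

def upper(cover, X):
    """Covering upper approximation: union of blocks meeting X."""
    return set().union(*(block for block in cover if block & X)) | set(X)

def is_connected(vertices, edges):
    cover = closed_neighborhoods(vertices, edges)
    X = {next(iter(vertices))}           # start from an arbitrary vertex
    while True:
        nX = upper(cover, X)
        if nX == X:                      # saturation: the connected component
            return X == set(vertices)
        X = nX

V = {1, 2, 3, 4}
g_connected = is_connected(V, [(1, 2), (2, 3), (3, 4)])   # a path: connected
g_split = is_connected(V, [(1, 2), (3, 4)])               # two components
```

The graph is connected exactly when the saturation of any singleton is the whole vertex set, which expresses connectedness purely through an approximation operator.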


Characteristic matrix of covering and its application to boolean matrix decomposition and axiomatization

Wang, Shiping, Zhu, Qingxin, Zhu, William, Min, Fan

arXiv.org Artificial Intelligence

Covering is an important type of data structure, and covering-based rough sets provide an efficient and systematic theory for dealing with covering data. In this paper, we use boolean matrices to represent and axiomatize three types of covering approximation operators. First, we define two types of characteristic matrices of a covering, which are essentially square boolean matrices, and study their properties. Through the characteristic matrices, three important types of covering approximation operators are concisely and equivalently represented. Second, matrix representations of covering approximation operators are used in boolean matrix decomposition. We provide a necessary and sufficient condition for a square boolean matrix to decompose into the boolean product of another matrix and its transpose, and we develop an algorithm for this boolean matrix decomposition. Finally, based on the above results, these three types of covering approximation operators are axiomatized using boolean matrices. In a word, this work borrows extensively from boolean matrices and presents a new view of covering-based rough sets.
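The flavor of such matrix representations can be sketched briefly. In the illustrative Python below (my own simplified construction, not necessarily the paper's exact characteristic matrices), the block-membership matrix M of a covering yields the square boolean matrix G = M^T ⊙ M, which records which elements share a block; the boolean product G ⊙ x then computes one covering upper approximation of the set encoded by the boolean column vector x.

```python
# A minimal sketch of a boolean characteristic matrix for a covering over
# the universe {0, ..., n-1}, and an upper approximation as a matrix product.
def bool_product(A, B):
    """Boolean matrix product: entries are OR of pairwise ANDs."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(inner)))
             for j in range(cols)] for i in range(rows)]

def characteristic(n, blocks):
    M = [[int(i in blk) for i in range(n)] for blk in blocks]
    MT = [list(col) for col in zip(*M)]
    return bool_product(MT, M)       # G[i][j] = 1 iff i and j share a block

n, blocks = 4, [{0, 1}, {1, 2}, {3}]
G = characteristic(n, blocks)
x = [[1], [0], [0], [0]]             # the set {0} as a boolean column vector
upper = bool_product(G, x)           # encodes {0, 1}: the blocks meeting {0}
```

Note that G is by construction the boolean product of a matrix and its transpose, which is exactly the shape of decomposition the paper characterizes.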
